

Optimizing Navigation And Chemical Application in Precision Agriculture With Deep Reinforcement Learning And Conditional Action Tree

arXiv.org Artificial Intelligence

This paper presents a novel reinforcement learning (RL)-based planning scheme for optimized robotic management of biotic stresses in precision agriculture. We introduce a domain-specific reward mechanism that maximizes yield recovery while minimizing chemical usage by effectively handling noisy infection data and enforcing physical field constraints via action masking. We conduct a rigorous empirical evaluation across diverse, realistic biotic stress scenarios, capturing varying infection distributions and severity levels in row-crop fields, demonstrating the framework's effectiveness and robustness. Experimental results show that our approach significantly reduces non-target spraying, chemical consumption, and operational costs compared to baseline methods.
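The abstract mentions enforcing physical field constraints via action masking. Below is a minimal sketch of how such masking is commonly implemented in value-based RL; the action set, mask semantics, and numbers are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of action masking for an RL spraying agent.
# The 5-action layout and mask semantics are illustrative assumptions.
import numpy as np


def masked_action_selection(q_values: np.ndarray, valid_mask: np.ndarray) -> int:
    """Pick the greedy action among those permitted by field constraints.

    q_values:   estimated action values from the policy/value network
    valid_mask: boolean array, True where the action is physically feasible
                (e.g., the robot cannot drive or spray outside its row).
    """
    masked = np.where(valid_mask, q_values, -np.inf)  # forbid invalid actions
    return int(np.argmax(masked))


# Example: 5 actions (stay, forward, back, spray-left, spray-right);
# spray-left is masked out because no infection was detected there.
q = np.array([0.1, 0.8, 0.2, 0.9, 0.3])
mask = np.array([True, True, True, False, True])
print(masked_action_selection(q, mask))  # -> 1, since action 3 is masked
```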


Enhanced Position Estimation in Tactile Internet-Enabled Remote Robotic Surgery Using MOESP-Based Kalman Filter

arXiv.org Artificial Intelligence

Accurately estimating the position of the patient-side robotic arm in real time during remote surgery is a significant challenge, especially within Tactile Internet (TI) environments. This paper presents a new and efficient method for position estimation using a Kalman Filter (KF) combined with the Multivariable Output-Error State Space (MOESP) method for system identification. Unlike traditional approaches that require prior knowledge of the system's dynamics, this study uses the JIGSAWS dataset, a comprehensive collection of robotic surgical data, along with input from the Master Tool Manipulator (MTM) to derive the state-space model directly. The MOESP method allows accurate modeling of the Patient Side Manipulator (PSM) dynamics without a prior system model, improving the KF's performance under simulated network conditions, including delays, jitter, and packet loss. These conditions mimic real-world challenges in Tactile Internet applications. The findings demonstrate the KF's improved resilience and accuracy in state estimation, achieving over 95 percent accuracy despite network-induced uncertainties.
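For context, here is a minimal sketch of the KF recursion such a pipeline would run once MOESP identification has produced a discrete state-space model (A, B, C); the matrices and noise covariances are placeholders, not the paper's identified model.

```python
# Minimal sketch of one Kalman Filter predict/update cycle driven by a
# MOESP-identified state-space model. All matrices here are placeholders.
import numpy as np


def kf_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a linear Kalman Filter."""
    # Predict: propagate state and covariance through the identified model.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: correct with the (possibly delayed, noisy) measurement y.
    S = C @ P_pred @ C.T + R               # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```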


A Predictive Approach for Enhancing Accuracy in Remote Robotic Surgery Using Informer Model

arXiv.org Artificial Intelligence

Precise and real-time estimation of the patient-side robotic arm's position is essential for the success of remote robotic surgery in Tactile Internet (TI) environments. This paper presents a prediction model based on the Transformer-based Informer framework for accurate and efficient position estimation, and incorporates a Four-State Hidden Markov Model (4-State HMM) to simulate realistic packet-loss scenarios. The proposed approach addresses challenges such as network delays, jitter, and packet loss to ensure reliable and precise operation in remote surgical applications. The method integrates the optimization problem into the Informer model by embedding constraints such as energy efficiency, smoothness, and robustness into its training process through a differentiable optimization layer. The Informer framework uses features such as ProbSparse attention, attention distilling, and a generative-style decoder to focus on position-critical features while maintaining a low computational complexity of O(L log L). The method is evaluated on the JIGSAWS dataset, achieving a prediction accuracy of over 90 percent under various network scenarios. A comparison with models such as TCN, RNN, and LSTM demonstrates the Informer framework's superior performance in handling position prediction and meeting real-time requirements, making it suitable for Tactile Internet-enabled robotic surgery.
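To illustrate the network-simulation side, here is a minimal sketch of a 4-state Markov packet-loss simulator in the spirit of the 4-State HMM the abstract describes; the transition matrix, state labels, and per-state loss rates are invented for illustration.

```python
# Minimal sketch of a 4-state Markov packet-loss simulator.
# The transition matrix and loss probabilities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# States 0..3: e.g., good, mildly degraded, bursty, outage (assumed labels).
P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.20, 0.60, 0.15, 0.05],
              [0.05, 0.20, 0.60, 0.15],
              [0.00, 0.10, 0.30, 0.60]])
loss_prob = np.array([0.00, 0.05, 0.30, 0.90])  # per-state packet-loss rate


def simulate_losses(n_packets: int, state: int = 0) -> np.ndarray:
    """Return a boolean array where True marks a lost packet."""
    lost = np.zeros(n_packets, dtype=bool)
    for t in range(n_packets):
        lost[t] = rng.random() < loss_prob[state]
        state = rng.choice(4, p=P[state])  # step along the Markov chain
    return lost


print(simulate_losses(20).astype(int))
```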


Automatic Extraction of Relevant Road Infrastructure Using Connected Vehicle Data and a Deep Learning Model

arXiv.org Artificial Intelligence

In today's rapidly evolving urban landscapes, efficient and accurate mapping of road infrastructure is critical for optimizing transportation systems, enhancing road safety, and improving the overall mobility experience for drivers and commuters. Yet a formidable bottleneck obstructs progress: the laborious and time-intensive manual identification of intersections. Considering the sheer number of intersections that need to be identified and the labor hours required per intersection, the need for an automated solution becomes undeniable. To address this challenge, we propose a novel approach that leverages connected vehicle data and cutting-edge deep learning techniques. By employing geohashing to segment vehicle trajectories and then generating image representations of road segments, we utilize the YOLOv5 (You Only Look Once version 5) algorithm for accurate classification of both straight road segments and intersections. Experimental results demonstrate an overall classification accuracy of 95%, with straight roads achieving a 97% F1 score and intersections reaching a 90% F1 score. This approach not only saves time and resources but also enables more frequent updates and a more comprehensive understanding of the road network. Our research showcases the potential impact on traffic management, urban planning, and autonomous vehicle navigation systems. The fusion of connected vehicle data and deep learning models holds promise for a transformative shift in road infrastructure mapping, propelling us toward a smarter, safer, and more connected transportation ecosystem.
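The geohashing step the abstract mentions can be sketched as bucketing GPS trajectory points by geohash cell so each cell can later be rendered as an image for YOLOv5; the use of the pygeohash package, the precision level, and the sample points are assumptions about tooling, not the paper's actual stack.

```python
# Minimal sketch: bucket connected-vehicle GPS points by geohash cell.
# pygeohash and precision=7 (~150 m cells) are illustrative choices.
from collections import defaultdict

import pygeohash as pgh

points = [  # (lat, lon) samples from vehicle trajectories (made up)
    (42.0267, -93.6465),
    (42.0268, -93.6466),
    (42.0308, -93.6319),
]

segments = defaultdict(list)
for lat, lon in points:
    cell = pgh.encode(lat, lon, precision=7)
    segments[cell].append((lat, lon))

# Each geohash cell now holds the trajectory points falling inside it;
# plotting each cell's points yields the road-segment images to classify.
for cell, pts in segments.items():
    print(cell, len(pts))
```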


A Survey of Techniques for Optimizing Transformer Inference

arXiv.org Artificial Intelligence

Recent years have seen a phenomenal rise in the performance and applications of transformer neural networks. The family of transformer networks, including Bidirectional Encoder Representations from Transformers (BERT), Generative Pretrained Transformer (GPT), and Vision Transformer (ViT), has shown its effectiveness across Natural Language Processing (NLP) and Computer Vision (CV) domains. Transformer-based networks such as ChatGPT have impacted the lives of the general public. However, the quest for high predictive performance has led to an exponential increase in transformers' memory and compute footprint. Researchers have proposed techniques to optimize transformer inference at all levels of abstraction. This paper presents a comprehensive survey of techniques for optimizing the inference phase of transformer networks. We survey techniques such as knowledge distillation, pruning, quantization, neural architecture search, and lightweight network design at the algorithmic level. We further review hardware-level optimization techniques and the design of novel hardware accelerators for transformers. We summarize the quantitative results on the number of parameters/FLOPs and the accuracy of several models/techniques to showcase the trade-offs they exercise. We also outline future directions in this rapidly evolving field of research. We believe that this survey will educate both novice and seasoned researchers and also spark a plethora of research efforts in this field.
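As one concrete instance of the algorithm-level techniques surveyed, here is a minimal post-training dynamic quantization sketch in PyTorch; the toy encoder is assumed for illustration and is not a model from the survey.

```python
# Minimal sketch: post-training dynamic quantization of a transformer's
# linear layers to int8 with PyTorch. The toy model is illustrative only.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2,
).eval()

# Store Linear weights as int8; activations are quantized on the fly at
# inference, trading a little accuracy for memory and speed.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16, 256)  # (batch, sequence, features)
with torch.no_grad():
    y = quantized(x)
print(y.shape)  # torch.Size([1, 16, 256])
```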


Dive into Deep Learning

arXiv.org Artificial Intelligence

Just a few years ago, there were no legions of deep learning scientists developing intelligent products and services at major companies and startups. When the youngest among us (the authors) entered the field, machine learning did not command headlines in daily newspapers. Our parents had no idea what machine learning was, let alone why we might prefer it to a career in medicine or law. Machine learning was a forward-looking academic discipline with a narrow set of real-world applications. And those applications, e.g., speech recognition and computer vision, required so much domain knowledge that they were often regarded as separate areas entirely for which machine learning was one small component. Neural networks then, the antecedents of the deep learning models that we focus on in this book, were regarded as outmoded tools. In just the past five years, deep learning has taken the world by surprise, driving rapid progress in fields as diverse as computer vision, natural language processing, automatic speech recognition, reinforcement learning, and statistical modeling. With these advances in hand, we can now build cars that drive themselves with more autonomy than ever before (and less autonomy than some companies might have you believe), smart reply systems that automatically draft the most mundane emails, helping people dig out from oppressively large inboxes, and software agents that dominate the world's best humans at board games like Go, a feat once thought to be decades away. Already, these tools exert ever-wider impacts on industry and society, changing the way movies are made and diseases are diagnosed, and playing a growing role in basic sciences--from astrophysics to biology.


Slowly Varying Regression under Sparsity

arXiv.org Machine Learning

We consider the problem of parameter estimation in slowly varying regression models with sparsity constraints. We formulate the problem as a mixed integer optimization problem and demonstrate that it can be reformulated exactly as a binary convex optimization problem through a novel exact relaxation. The relaxation utilizes a new equality on Moore-Penrose inverses that convexifies the non-convex objective function while coinciding with the original objective on all feasible binary points. This allows us to solve the problem significantly more efficiently and to provable optimality using a cutting plane-type algorithm. We develop a highly optimized implementation of this algorithm, which substantially improves upon the asymptotic computational complexity of a straightforward implementation. We further develop a heuristic method that is guaranteed to produce a feasible solution and, as we empirically illustrate, generates high-quality warm-start solutions for the binary optimization problem. We show, on both synthetic and real-world datasets, that the resulting algorithm outperforms competing formulations in comparable times across a variety of metrics, including out-of-sample predictive performance, support recovery accuracy, and false positive rate. The algorithm enables us to train models with tens of thousands of parameters, is robust to noise, and is able to effectively capture the underlying slowly changing support of the data-generating process.
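A schematic of the kind of objective the abstract describes, reconstructed from its description (the exact penalties, constraints, and notation in the paper may differ):

```latex
% Schematic slowly varying sparse regression formulation
% (a reconstruction from the abstract, not the paper's exact model):
\min_{\beta_1,\dots,\beta_T}\;
  \sum_{t=1}^{T} \lVert y_t - X_t \beta_t \rVert_2^2
  + \lambda \sum_{t=2}^{T} \lVert \beta_t - \beta_{t-1} \rVert_2^2
\quad \text{s.t.}\quad
  \lVert \beta_t \rVert_0 \le k \quad \forall t ,
```

where the fit term tracks each period's data, the second term penalizes how quickly the coefficients vary, and the cardinality constraint enforces sparsity; the binary variables in the mixed integer formulation would encode each coefficient's support.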


How I Predicted A Dataset Without Encoding Or Scaling Features

#artificialintelligence

Do you want a quick model that does not require you to separate the numerical columns from the categorical ones, does not require you to ordinal-encode or one-hot encode them, and does not require you to standardise the independent variables? If your answer is yes, then maybe you need to try CatBoost. CatBoost is an open-source library, based on the concept of gradient boosting, developed by the Russian company Yandex. CatBoost is an especially powerful library because it yields state-of-the-art results without the extensive data training typically required by other machine learning methods, and provides powerful out-of-the-box support for the more descriptive data formats that accompany many business problems. In order to show that CatBoost can make predictions on categorical data that has not been encoded and scaled, I selected a very popular dataset to experiment on: Kaggle's Ames House Price dataset, which forms part of their competition House Prices -- Advanced Regression Techniques. As Kaggle describes it: "Ask a home buyer to describe their dream house, and they probably won't begin with the height of the basement ceiling or the proximity to an east-west railroad. But this playground competition's dataset proves that much more influences price negotiations than the number of bedrooms or a white-picket fence. With 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, this competition challenges you to predict the final price of each home. The Ames Housing dataset was compiled by Dean De Cock for use in data science education. It's an incredible alternative for data scientists looking for a modernized and expanded version of the often cited Boston Housing dataset."
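A minimal sketch of the no-encoding workflow described above, fitting CatBoost directly on mixed-type features; the column subset is an illustrative slice of the Ames fields, not the full competition setup.

```python
# Minimal sketch: fit CatBoost on raw mixed-type features, no manual
# encoding or scaling. Columns below are a small illustrative subset.
import pandas as pd
from catboost import CatBoostRegressor

df = pd.read_csv("train.csv")  # Kaggle Ames "House Prices" training file
features = ["Neighborhood", "HouseStyle", "OverallQual", "GrLivArea"]
X, y = df[features], df["SalePrice"]

# CatBoost only needs to be told which columns are categorical;
# it handles the encoding internally (ordered target statistics).
cat_features = ["Neighborhood", "HouseStyle"]
model = CatBoostRegressor(iterations=500, verbose=100, random_seed=0)
model.fit(X, y, cat_features=cat_features)
print(model.predict(X[:5]))
```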


Chasing storm data: machine learning looks for useful data in U.S. thunderstorm reports

#artificialintelligence

Newswise -- AMES, Iowa -- Bill Gallus has been known to chase a summer storm or two. But he didn't have to go after this one. On July 17, 2019, a thunderstorm approached the Iowa State University campus. Gallus, a professor of geological and atmospheric sciences, headed to the roof above his office in the Agronomy Building. And he didn't forget a camera.


Machine learning in agriculture: scientists are teaching computers to diagnose soybean stress

#artificialintelligence

AMES, Iowa - Iowa State University scientists are working toward a future in which farmers can use unmanned aircraft to spot, and even predict, disease and stress in their crops. Their vision relies on machine learning, an automated process in which technology can help farmers respond to plant stress more efficiently. Arti Singh, an adjunct assistant professor of agronomy, is leading a multi-disciplinary research team that recently received a three-year, $499,845 grant from the U.S. Department of Agriculture's National Institute of Food and Agriculture to develop machine learning technology that could automate farmers' ability to diagnose a range of major stresses in soybeans. The technology under development would make use of cameras attached to unmanned aerial vehicles, or UAVs, to gather bird's-eye images of soybean fields. A computer application would automatically analyze the images and alert the farmer to trouble spots.
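To make the envisioned pipeline concrete, here is a minimal sketch of how UAV image tiles might be scored for stress and flagged as trouble spots; the architecture, class list, and alert threshold are illustrative assumptions, not the team's actual system.

```python
# Minimal sketch: score UAV field-image tiles for soybean stress and flag
# hotspots. Model, classes, and threshold are illustrative assumptions.
import torch
from torchvision.models import resnet18

CLASSES = ["healthy", "disease", "nutrient_deficiency", "herbicide_injury"]

model = resnet18(num_classes=len(CLASSES)).eval()  # untrained placeholder


def alert_on_stress(tile_batch: torch.Tensor, threshold: float = 0.5):
    """Flag tiles whose predicted non-healthy probability exceeds threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(tile_batch), dim=1)
    stress_prob = 1.0 - probs[:, CLASSES.index("healthy")]
    return (stress_prob > threshold).nonzero(as_tuple=True)[0].tolist()


tiles = torch.randn(8, 3, 224, 224)  # 8 tiles cropped from a UAV mosaic
print(alert_on_stress(tiles))        # indices of flagged trouble spots
```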